# Dialogue quality scoring

RM Mistral 7B
A reward model fine-tuned from Mistral-7B that scores response quality for Reinforcement Learning from Human Feedback (RLHF).
Tags: Large Language Model, Transformers
Author: weqweasdas
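Reward models of this kind assign a scalar quality score to each candidate response, and are commonly trained with a pairwise (Bradley-Terry) objective on human preference data. The sketch below illustrates that objective in plain Python; the two reward values are purely illustrative assumptions, not outputs of RM Mistral 7B.

```python
import math

# Hypothetical reward scores that a reward model might assign to two
# candidate responses for the same prompt (higher = better quality).
# These numbers are illustrative, not real model outputs.
reward_chosen = 2.3
reward_rejected = 0.7

def preference_probability(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry probability that the chosen response beats the
    rejected one: the sigmoid of the reward difference."""
    return 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))

p = preference_probability(reward_chosen, reward_rejected)
loss = -math.log(p)  # pairwise ranking loss used when training reward models
print(f"P(chosen > rejected) = {p:.3f}, loss = {loss:.3f}")
```

During RLHF, such scores serve as the reward signal that the policy model is optimized against.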
© 2025 AIbase